
    Task-demands can immediately reverse the effects of sensory-driven saliency in complex visual stimuli

    In natural vision both stimulus features and task-demands affect an observer's attention. However, the relationship between sensory-driven (“bottom-up”) and task-dependent (“top-down”) factors remains controversial: can task-demands counteract strong sensory signals fully, quickly, and irrespective of bottom-up features? To measure attention under naturalistic conditions, we recorded eye-movements in human observers while they viewed photographs of outdoor scenes. In the first experiment, smooth modulations of contrast biased the stimuli's sensory-driven saliency towards one side. In free viewing, observers' eye-positions were immediately biased toward the high-contrast, i.e., high-saliency, side. However, this sensory-driven bias disappeared entirely when observers searched for a bull's-eye target embedded with equal probability on either side of the stimulus. When the target always occurred on the low-contrast side, observers' eye-positions were immediately biased towards this low-saliency side, i.e., the sensory-driven bias reversed. Hence, task-demands not only override sensory-driven saliency but actively countermand it. In a second experiment, a 5-Hz flicker replaced the contrast gradient. Although the flicker bias was less persistent in free viewing, the overriding and reversal took longer to deploy; hence, insufficient sensory-driven saliency cannot account for the bias reversal. In a third experiment, subjects searched for a spot of locally increased contrast (“oddity”) instead of the bull's-eye (“template”). In contrast to the other conditions, a slight sensory-driven free-viewing bias persisted here. In a fourth experiment, we demonstrate that template targets are detected faster than oddity targets at known locations, suggesting that the former induce a stronger top-down drive when used as search targets. Taken together, task-demands can override sensory-driven saliency in complex visual stimuli almost immediately, and the extent of overriding depends on the search target and the overridden feature, but not on the latter's free-viewing saliency.
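The time course of such an eye-position bias can be quantified by binning fixations after stimulus onset and asking, per bin, what fraction lands on the high-saliency half. The following is a hypothetical analysis sketch, not the paper's actual pipeline: the variable names, the left-equals-high-contrast convention, and the bin edges are all illustrative assumptions.

```python
def bias_over_time(fix_x, fix_t, img_width, bin_edges):
    """Per time bin, the fraction of fixations on the high-contrast half
    (assumed here to be x < img_width / 2); 0.5 means no bias,
    None means no fixations fell in that bin."""
    fractions = []
    for lo, hi in zip(bin_edges[:-1], bin_edges[1:]):
        xs = [x for x, t in zip(fix_x, fix_t) if lo <= t < hi]
        if not xs:
            fractions.append(None)
        else:
            fractions.append(sum(x < img_width / 2 for x in xs) / len(xs))
    return fractions

# Toy data: early fixations cluster on the left (high-contrast) side,
# later ones spread out -- early bins yield values near 1.0, later bins lower.
fractions = bias_over_time(
    fix_x=[100, 120, 90, 400, 600, 150, 700, 650],
    fix_t=[0.2, 0.3, 0.4, 1.2, 1.4, 1.6, 2.1, 2.4],
    img_width=800,
    bin_edges=[0, 1, 2, 3],
)
```

Comparing these per-bin fractions between free viewing and the search conditions is one way to make "immediately biased" and "the bias reversed" precise.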

    Towards a Realistic MSSM Prediction for Neutrino-nucleon Deep-inelastic Scattering

    We discuss the radiative corrections to charged- and neutral-current deep-inelastic neutrino-nucleon scattering in the minimal supersymmetric standard model (MSSM). In particular, we study deviations, delta_R^nu(nu-bar), from the Standard Model prediction for the ratios of neutral- to charged-current cross sections, taking into account all sources of deviations in the MSSM, i.e. the different contributions from virtual Higgs bosons and virtual superpartners. Our calculation includes the full q^2 dependence of the one-loop amplitudes, parton distribution functions, and a NuTeV-inspired process kinematics. We present results of a scan of delta_R^nu(nu-bar) over the relevant MSSM parameter space. Comment: 5 pages, 2 figures, to appear in Proceedings of SUSY06, the 14th International Conference on Supersymmetry and the Unification of Fundamental Interactions, UC Irvine, California, 12-17 June 2006
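Schematically, the quantities scanned here can be written as follows; the precise conventions are assumed, not taken from the abstract:

```latex
% Neutral- to charged-current cross-section ratio for neutrinos
% (an analogous ratio R^{\bar\nu} holds for antineutrinos):
R^{\nu} \;=\; \frac{\sigma(\nu_\mu N \to \nu_\mu X)}{\sigma(\nu_\mu N \to \mu^- X)},
\qquad
\delta_R^{\nu} \;=\; \frac{R^{\nu}_{\mathrm{MSSM}} - R^{\nu}_{\mathrm{SM}}}{R^{\nu}_{\mathrm{SM}}} .
```

The deviation delta_R^nu is thus the relative shift of the MSSM prediction for the ratio against its Standard Model value, which is what the parameter-space scan maps out.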

    Predicting human gaze using low-level saliency combined with face detection

    Under natural viewing conditions, human observers shift their gaze to allocate processing resources to subsets of the visual input. Many computational models try to predict such voluntary eye and attentional shifts. Although the important role of high-level stimulus properties (e.g., semantic information) in search stands undisputed, most models are based on low-level image properties. We here demonstrate that a combined model of face detection and low-level saliency significantly outperforms a low-level-only model in predicting the locations humans fixate, based on eye-movement recordings of humans observing photographs of natural scenes, most of which contained at least one person. Observers, even when not instructed to look for anything in particular, fixate on a face with a probability of over 80% within their first two fixations; furthermore, they exhibit more similar scanpaths when faces are present. Remarkably, our model's predictive performance in images that do not contain faces is not impaired, and is even improved in some cases by spurious face-detector responses.
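The combination idea can be sketched as a pointwise merge of two normalized maps, a low-level saliency map and a face-detector confidence map, with the merged map's peak serving as the predicted fixation. This is a minimal sketch under assumed conventions (peak normalization, a single mixing weight w); the paper's actual combination scheme may differ.

```python
def normalize(m):
    """Scale a 2-D map (list of rows) so its maximum is 1 (assumed convention)."""
    peak = max(max(row) for row in m)
    return [[v / peak for v in row] for row in m] if peak > 0 else m

def combined_map(saliency, faces, w=0.5):
    """Pointwise weighted combination of a saliency map and a face map;
    the weight w is an illustrative free parameter."""
    s, f = normalize(saliency), normalize(faces)
    return [[(1 - w) * sv + w * fv for sv, fv in zip(srow, frow)]
            for srow, frow in zip(s, f)]

def predicted_fixation(pred):
    """(row, col) of the combined map's global maximum."""
    best = max((v, r, c) for r, row in enumerate(pred) for c, v in enumerate(row))
    return best[1], best[2]

# Toy 4x4 maps: a weak saliency peak at (0, 0), a strong face response at (2, 3).
sal = [[0.0] * 4 for _ in range(4)]; sal[0][0] = 1.0
fac = [[0.0] * 4 for _ in range(4)]; fac[2][3] = 2.0
where = predicted_fixation(combined_map(sal, fac, w=0.7))
```

With w above 0.5 the face response dominates, matching the observation that faces attract the first fixations; evaluating such a map against recorded fixations is the standard way these models are scored.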

    Advanced Sensor and Dynamics Models with an Application to Sensor Management


    Iconicity in instructional texts

    Diagrammatic iconicity is usually investigated at the surface syntactic level of texts. In this paper, I try to show that a meaningful concept of iconicity cannot be found on this level in non-trivial instructional texts. Instead, we have to dive deeper into semantic and conceptual structure. I present a model of Conceptual Structure that can cope with the demands that understanding an instructional text puts on the reader, and after analyzing a concrete text (a cooking recipe), I show that the concept of control structure is of essential importance for describing the mapping between a conceptual model and a text. Control structures can be expressed explicitly through linguistic means or be inherent in the semantics of lexical predicates. In both cases, the presence of a dynamic conceptual model is necessary in order to establish iconicity relations between the text and the underlying mental representation.

    Hans-Jürgen Heringer: Deutsche Syntax dependentiell. Tübingen: Stauffenberg Verlag, 1996
